Lumping the States of a Finite Markov Chain through Stochastic Factorization
Authors
Abstract
In this work we show how the lumping of states of a finite Markov chain can be regarded as a special decomposition of its transition matrix called stochastic factorization. The idea is simple: when a transition matrix is factored into the product of two stochastic matrices, one can swap the factors of the multiplication to obtain another model, potentially much smaller than the original one. We prove in the paper that the smaller Markov chain has the same reducibility and the same number of closed sets as the original model. Additionally, the stationary distributions of both chains are related through a linear transformation. By interpreting the lumping of states as a particular case of stochastic factorization, we discuss in which circumstances the lumped transition matrix can be used in place of the original one to compute its stationary distribution. To illustrate our ideas we use the computation of Google’s PageRank as an example.
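As a rough illustration of the factor-swap idea described in the abstract, the short NumPy sketch below builds a stochastic factorization P = DK, swaps the factors to obtain the smaller chain Q = KD, and maps a stationary distribution of Q back to one of P through the linear transformation pi = mu K. The specific matrices D and K and the helper function stationary() are illustrative assumptions, not taken from the paper.

import numpy as np

def stationary(T):
    # Stationary distribution of a stochastic matrix T: the left eigenvector
    # of T for eigenvalue 1, normalized to sum to one.
    vals, vecs = np.linalg.eig(T.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

# Illustrative stochastic factors (every row sums to 1).
D = np.array([[1.0, 0.0],
              [0.3, 0.7],
              [0.0, 1.0]])       # 3 x 2
K = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.2, 0.7]])  # 2 x 3

P = D @ K   # original 3-state transition matrix
Q = K @ D   # swapped, smaller 2-state transition matrix

mu = stationary(Q)   # stationary distribution of the small chain
pi = mu @ K          # linear map back to the original state space

# pi is stationary for P, since pi P = mu (K D) K = mu Q K = mu K = pi.
print(np.allclose(pi @ P, pi))   # expected: True

The point made in the abstract is that, when such a factorization is available (for instance, from a lumping of the states), mu can be computed on the much smaller chain Q and then expanded back to the original state space.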
Similar articles
Relative Entropy Rate between a Markov Chain and Its Corresponding Hidden Markov Chain
In this paper we study the relative entropy rate between a homogeneous Markov chain and a hidden Markov chain defined by observing the output of a discrete stochastic channel whose input is a finite-state, homogeneous, stationary Markov chain. For this purpose, we obtain the relative entropy between two finite subsequences of the above-mentioned chains with the help of the definition of...
Arrival probability in the stochastic networks with an established discrete time Markov chain
The possible lack of some arcs and nodes in stochastic networks is considered in this paper, and its effect is expressed as the arrival probability from a given source node to a given sink node. A discrete-time Markov chain with an absorbing state is established in a directed acyclic network. Then, the probability of transition from the initial state to the absorbing state is computed. It is as...
Distribution of First Passage Times for Lumped States in Markov Chains
First passage time in Markov chains is defined as the first time that a chain reaches a specified state or set of lumped states. This state or set of lumped states may mark the first occurrence of an interesting or rare event. In this study, the distribution of the first passage time for lumped states, constructed by gathering states through the lumping method, is obtained for an irreduci...
Lumpings of Markov Chains, Entropy Rate Preservation, and Higher-Order Lumpability
A lumping of a Markov chain is a coordinate-wise projection of the chain. We characterise the entropy rate preservation of a lumping of an aperiodic and irreducible Markov chain on a finite state space by the random growth rate of the cardinality of the realisable preimage of a finite-length trajectory of the lumped chain and by the information needed to reconstruct original trajectories from t...
Optimal state-space lumping in Markov chains
We prove that the optimal lumping quotient of a finite Markov chain can be constructed in O(m lg n) time, where n is the number of states and m is the number of transitions. The proof relies on the use of splay trees [18] to sort transition weights.
Publication date: 2011